-
Consider public health officials aiming to spread awareness about a new vaccine in a community interconnected by a social network. How can they distribute information with minimal resources, so as to avoid polarization and ensure community-wide convergence of opinion? To tackle such challenges, we initiate the study of the sample complexity of opinion formation in networks. Our framework builds on the well-studied opinion formation game, but regards each agent's opinion as a data-derived model, unlike previous works that treat opinions as data-independent scalars. Each agent's opinion model is initially learned from its local samples and evolves game-theoretically as all agents communicate with their neighbors and revise their models towards an equilibrium. Our focus is the sample complexity needed to ensure that the opinions converge to an equilibrium in which every agent's final model has low generalization error. Our paper has two main technical results. First, we present a novel polynomial-time optimization framework to quantify the total sample complexity for arbitrary networks when the underlying learning problem is (generalized) linear regression. Second, we leverage this optimization to study the network gain, which measures the improvement in sample complexity when learning over a network compared to learning in isolation. Towards this end, we derive network gain bounds for various network classes, including cliques, star graphs, and random regular graphs. Additionally, our framework provides a method to study how samples should be distributed within the network, suggesting that it suffices to allocate samples inversely to agents' degrees. Empirical results on both synthetic and real-world networks strongly support our theoretical findings.
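The equilibrium notion here is the standard one for quadratic opinion-formation games, so a small simulation can illustrate the setup. The sketch below is an illustrative reading under assumptions, not the paper's algorithm: each agent fits a linear-regression model on its own samples, then agents repeatedly best-respond by averaging with neighbors until the models equilibrate. The function names (`fit_local_model`, `equilibrium_opinions`), the unit edge weights, and the toy path graph are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_local_model(X, y):
    """Ordinary least squares on an agent's local samples (a hedged stand-in
    for the 'data-derived model' in the abstract)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def equilibrium_opinions(adj, internal, n_iters=200):
    """Best-response dynamics z_i <- (s_i + sum_j A_ij z_j) / (1 + deg_i)
    for the quadratic opinion-formation game; opinions are whole weight
    vectors, updated coordinate-wise."""
    z = internal.copy()
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        z = (internal + adj @ z) / (1.0 + deg)
    return z

# Toy instance: 3 agents on a path graph, d-dimensional regression.
d = 4
true_w = rng.normal(size=d)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)

internal = []
for n_samples in (5, 20, 5):          # unequal local sample budgets
    X = rng.normal(size=(n_samples, d))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    internal.append(fit_local_model(X, y))
internal = np.array(internal)

final = equilibrium_opinions(adj, internal)
print("per-agent error before:", np.linalg.norm(internal - true_w, axis=1))
print("per-agent error after: ", np.linalg.norm(final - true_w, axis=1))
```

On this toy instance the two endpoint agents, which hold only a handful of samples, typically end up with lower error after equilibration by borrowing from the well-sampled center; that improvement is the kind of "network gain" the abstract quantifies.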
-
We introduce Online Balanced Allocation of Dynamic Components (OBADC), a problem motivated by the practical challenge of dynamic resource allocation for large-scale distributed applications. In OBADC, we need to allocate a dynamic set of at most kℓ vertices (representing processes) among ℓ > 0 clusters. We consider an over-provisioned setup in which each cluster can hold at most k(1+ε) vertices, for an arbitrary constant ε > 0. The communication requirements among the vertices are modeled by the notion of a dynamically changing component, a subset of vertices that must be co-located in the same cluster. At each time t, a request r_t of one of the following types arrives: 1) insertion of a vertex v, forming a singleton component {v}, at unit cost; 2) merge of (u,v), requiring that the components containing u and v be merged and co-located thereafter; 3) deletion of an existing vertex v, at zero cost. Before serving any request, an algorithm may migrate vertices from one cluster to another at a unit migration cost per vertex. We seek an online algorithm that minimizes the total migration cost incurred over an arbitrary request sequence σ = (r_t)_{t > 0} while simultaneously minimizing the number of clusters utilized. We analyze competitiveness with respect to an optimal clairvoyant offline algorithm with identical (over-provisioned) capacity constraints. We give an O(log k)-competitive algorithm for OBADC, and a matching lower bound. The number of clusters utilized by our algorithm is always within a (2+ε) factor of the minimum. Furthermore, in a resource-augmented setting where the optimal offline algorithm is constrained to capacity k per cluster, our algorithm achieves O(log k) competitiveness and utilizes a number of clusters within a (1+ε) factor of the minimum. We also consider OBADC in the context of machine-learned predictions, where for each newly inserted vertex v at time t: i) with probability η > 0, the set of vertices (that exist at time t) in the component of v is revealed, and ii) with probability 1-η, no information is revealed. For OBADC with predictions, we give an O(1)-consistent and O(min(log(1/η), log k))-robust algorithm.
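To make the request model concrete, here is a minimal, hedged sketch of a naive baseline: components are tracked with union-find, new vertices are placed first-fit, and on a merge the merged component stays in the larger side's cluster if it fits, else the whole component moves to a first-fit cluster. The class name `NaiveOBADC` and these policies are invented for illustration; this is not the O(log k)-competitive algorithm the abstract analyzes, and its migration cost can be far from optimal.

```python
from collections import defaultdict

class NaiveOBADC:
    def __init__(self, k, eps):
        self.cap = int(k * (1 + eps))    # over-provisioned capacity per cluster
        self.comp = {}                   # vertex -> parent (union-find forest)
        self.members = {}                # component root -> set of its vertices
        self.cluster = {}                # component root -> cluster id
        self.load = defaultdict(int)     # cluster id -> current vertex count
        self.migrations = 0              # total unit migration cost paid

    def find(self, v):
        # standard union-find lookup with path halving
        while self.comp[v] != v:
            self.comp[v] = self.comp[self.comp[v]]
            v = self.comp[v]
        return v

    def _first_fit(self, size):
        # first existing cluster with room, else open a fresh one
        for c in self.load:
            if self.load[c] + size <= self.cap:
                return c
        return len(self.load)

    def insert(self, v):
        self.comp[v] = v
        self.members[v] = {v}
        c = self._first_fit(1)
        self.cluster[v] = c
        self.load[c] += 1

    def merge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return                       # already co-located
        if len(self.members[ru]) < len(self.members[rv]):
            ru, rv = rv, ru              # keep ru as the larger component
        cu, cv = self.cluster[ru], self.cluster[rv]
        total = len(self.members[ru]) + len(self.members[rv])
        # account as if both sides were lifted out, then placed together
        self.load[cu] -= len(self.members[ru])
        self.load[cv] -= len(self.members[rv])
        dst = cu if self.load[cu] + total <= self.cap else self._first_fit(total)
        if dst != cu:
            self.migrations += len(self.members[ru])
        if dst != cv:
            self.migrations += len(self.members[rv])
        self.load[dst] += total
        self.comp[rv] = ru               # union-find merge
        self.members[ru] |= self.members.pop(rv)
        self.cluster[ru] = dst

    def delete(self, v):
        r = self.find(v)
        self.members[r].discard(v)
        self.load[self.cluster[r]] -= 1
        # v's union-find entry stays as a tombstone so find() remains valid
```

Even this baseline shows where migration cost arises: a merge that overflows a cluster forces an entire component to relocate at cost equal to its size, and it is exactly this cost that the competitive analysis bounds against the clairvoyant offline optimum.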